Lexical Functions for Ants Based Semantic Analysis
Abstract
Semantic analysis (SA) is a central operation in natural language processing. It can be viewed as the resolution of five problems: lexical ambiguity, references, prepositional attachments, interpretation paths and lexical function instantiation. In this article, we show the importance of this last problem and explain why these tasks should be carried out simultaneously, using thematic (conceptual vectors) and lexical (semantic lexical network) information. We present an ant colony model which fulfils these criteria, show the feasibility of our approach on a small corpus, and show the contribution of lexical functions to solving the problem. This ant colony model offers new and interesting research perspectives.

Many Natural Language Processing applications, such as automatic summarization, information retrieval or machine translation, can take advantage of semantic analysis (SA), which consists of, among other things, computing a thematic representation of a whole text and of its subparts. In our case, thematic information is computed as conceptual vectors, which represent ideas and provide a quick estimate of whether texts, paragraphs, sentences or words belong to the same semantic field, i.e. whether or not they share ideas. At least five main problems should be solved during a SA: (1) lexical ambiguities; (2) references, i.e. resolving anaphora and identity referencing; (3) prepositional attachments, i.e. finding the syntactic head to which a prepositional phrase is linked; (4) interpretation paths, which concerns the resolution of compatible ambiguities; (5) the most important for us in this article, the instantiation of lexical functions (LFs). LFs model typical relations between terms and include synonymy, the different types of antonymy, intensification (“strong fear”, “heavy rain”) or the typical instrument relation (↪knife↩ is the typical instrument of ↪to cut↩, ↪shovel↩ of ↪to dig↩).

In this article, we show that lexical functions are needed to model world knowledge (“Napoleon was an emperor”) as well as language knowledge (↪destiny↩ is a synonym of ↪fate↩), and that they play a central role in SA, both by contributing to the resolution of the ambiguities mentioned above and by addressing specific problems of individual applications. We will see that their detection in texts requires both thematic and lexical information. Thematic information is handled using conceptual vectors, which allow us to describe the ideas contained in any textual segment (document, paragraph, sentence, phrase, . . . ). Lexical information is handled using a lexical network. Thus, our objective is to solve the five phenomena using a semantic lexical base whose lexical objects are linked to each other by typical relations and associated with conceptual vectors describing the ideas they convey.

Usually, these phenomena are resolved separately: anaphora resolution, the prepositional attachment problem and especially lexical disambiguation are studied independently. This is not the approach we adopt here. Instead, our work is based on the reasonable assumption that these ambiguities are often interdependent and that it is advantageous to undertake these tasks in a holistic way. One way to deal with these various problems holistically is to use a technique from distributed artificial intelligence, the meta-heuristic of ant colony algorithms. Inspired by the collective behavior of biological ants, these algorithms are used to solve difficult problems, in particular problems related to graphs (TSP, partitioning, . . . ), and are applied in operations research or to solve network routing problems.
Ant colony algorithms are used in a different way for SA. They are not one method among others to solve a single problem, but rather a method which allows the simultaneous and interdependent resolution of these various tasks. Each ant caste corresponds to a heuristic which helps to solve a particular problem (in the model presented, the detection of a particular lexical function) and has a behaviour influenced in part by the activities of the other ants. The environment is made up of both the morpho-syntactic tree of the text and a lexical network which contains typical relations between terms. There is one nest for each word meaning (acception), and nests compete during resource foraging. Ants build bridges between compatible acceptions, which can be considered as sentence interpretations. We demonstrate the efficiency of this approach for solving SA problems.

I. SEMANTIC ANALYSIS (SA)

Five semantic phenomena can be solved during a SA:

(1) Lexical ambiguity: words can have several meanings. This well-known phenomenon leads to one of the most important problems in NLP, lexical disambiguation (also often called Word Sense Disambiguation, WSD). It involves selecting the most appropriate acception of each word in the text. We define an acception as a particular meaning of a lexical item acknowledged and recognized by usage; it is a semantic unit acceptable in a given language. For example, we can consider that ↪mouse↩ has three acceptions: the nouns for the ↪computer device↩ and for the ↪rodent↩, and the verb for hunting the animal. Contrary to lexical items, acceptions are thus monosemous. WSD is certainly a widely studied problem in SA [11]. For MT, it is essential to know which particular meaning is used in the source text, because translations often differ; for example, the English word ↪river↩ can be translated into French as ↪fleuve↩ or ↪rivière↩. In information retrieval, it helps to eliminate documents which contain only inappropriate senses of a word with respect to the request, thereby increasing recall and precision.

(2) References: there are two types. (a) Anaphora is the phenomenon whereby a pronoun is related to another element of the text. For example, in “The cat climbed onto the seat, then it began to sleep.”, "it" refers to "cat" and not to "seat". Anaphora resolution is important in MT because it associates pronouns with content nouns, and genders often vary across languages, so anaphora resolution can help to translate the pronoun itself. Thus, in French, "it" can be translated either as "il" (masculine), as in our example, or "elle" (feminine), whereas in German it could be "er", "sie" or "es", since German has three genders. Note that in German the pronoun would be "sie" (feminine) and not masculine, as in French (“Die Katze kletterte auf den Sitz und (sie) begann dann zu schlafen”). (b) Identity holds when two words in a text refer to the same entity, such as "cat" and "animal" in “The cat climbed onto the chair. The animal began to sleep.”.

(3) Prepositional attachment concerns finding the dependency link between a prepositional phrase and a syntactic head (verb, noun, adjective) [9]. In “He sees the girl with a telescope.”, the prepositional phrase “with a telescope” can be attached to the nominal phrase “the girl” or to the verbal phrase “sees”.
This is crucial in MT, especially for a language like English where prepositions considerably modify verb meaning. In “The man took a ferry across the river.”, the most logical attachment for ↪across↩ is to the verb ↪to take↩, which yields the French “L’homme traversa la rivière en ferry.”. The attachment to ↪ferry↩ gives another meaning and hence another translation, “L’homme prit un ferry à travers la rivière.”.

(4) Interpretation paths: due to other ambiguities, a sentence can have several interpretations. Such ambiguities occur often, especially in short texts, where less information is available. [17] presents discussions and examples of this phenomenon. For example, “The sentence is too long.” can be interpreted as a phrase of non-trivial length or as a condemnation of non-trivial duration.

(5) Instantiation of lexical functions, which is the central point of this article and is presented now.

II. LEXICAL FUNCTIONS

A. Lexical and World Knowledge

The existence of a distinction between lexical knowledge (LK) and world knowledge (WK) has been the subject of a great debate, particularly since the beginning of the 1980s. According to John Haiman [10], there is no difference between the two, while Wierzbicka [22] argues that they are completely different. An interesting review of the status of lexical knowledge versus world knowledge in the process of interpretation can be found in Kornél Bangha’s PhD thesis [1]. Here, we adopt an intermediary stance close to his. We consider that knowledge can be divided into three categories: (1) WK which is not directly lexicalised, and which is thus not LK. For example, someone can know some facts of geography (Where is New York?), of history (How did JFK die?) or of everyday life (What is the color of a horse?); however, this information is not lexicalised and can be expressed only through statements. (2) WK which is directly lexicalised. For example, the sentence “During monsoon season, Penang has heavy rain” represents the real-world amount of rainfall in Penang during the monsoon, lexicalised through ↪heavy↩. (3) LK which cannot be considered a lexicalisation of WK. This is the case of grammatical gender in languages like French or German: the French lexical items ↪voiture↩ (↪car↩) and ↪mare↩ (↪pool↩) are feminine, which does not correspond to any information about the objects.

B. LF for Linguistic Knowledge (LFLK)

LFLK are similar to Mel’čuk’s LFs [15]. They model LFs which correspond to linguistic knowledge. One must be aware that these functions also represent a state of the world, but this state is represented by a particular, yet (synchronically) arbitrary, item of the language. Thus, the sentence “John had a strong fear” corresponds to the real-world situation of the intense fear experienced by John, and is lexicalised by the magnitude LF Magn and one of its values, ↪strong↩. There are two kinds of LFLK: paradigmatic ones, which formalise classical semantic relations, and syntagmatic ones, which formalise collocations, “combinations of lexical items which prevail over others without any obvious logical reason” [18]. In the first category we have: synonymy (Syn), which characterises different forms with the same meaning, given only by usage and without direct relationship to reality, Syn(↪plane↩) = {↪airplane↩, ↪aeroplane↩, . . . }; antonymies (Anti), which concern items whose semantic features are symmetric relative to an axis, Anti(↪life↩) = {↪death↩, . . .
}, Anti(↪hot↩) = {↪cold↩, . . . }; generics (Gener), which correspond to substitution hypernyms, i.e. terms of the hierarchy which are preferred to others as references by usage. To illustrate, we do not say “The vehicle has landed” but “The aircraft has landed”, so Gener(↪plane↩) = {↪aircraft↩} but Gener(↪plane↩) ≠ {↪vehicle↩}. This function is different from hypernymy, where Hyper(↪plane↩) = {↪aircraft↩, ↪vehicle↩}. Among the syntagmatic LFs, we have adjectival LFs like intensification (Magn) and confirmation (Ver), Magn(↪tea↩) = {↪strong↩}, Magn(↪rain↩) = {↪heavy↩}, Ver(↪agreement↩) = {↪good↩, ↪positive↩, . . . }; the collective Mult, Mult(↪dog↩) = {↪pack↩}, and its opposite Sing, Sing(↪rice↩) = {↪grain↩}.

C. LF for World Knowledge (LFWK)

LFWK model knowledge about the world. Among the LFWKs, we have: hypernymy (Hyper), which is class hypernymy, contrary to Gener, which is substitution hypernymy. As already mentioned, the world knowledge “a chair is a seat” is transcribed in language by the fact that ↪seat↩ is a hypernym of ↪chair↩, which is LK. Hyper(↪plane↩) = {↪aircraft↩, ↪vehicle↩, . . . }; its opposite relation, hyponymy (Hypo), which can be seen as the transcription in language of the property that a class is a subclass of another, Hypo(↪aircraft↩) = {↪plane↩}, Hypo(↪vehicle↩) = {↪plane↩, ↪car↩, ↪boat↩}; instance (Inst), Inst(↪writer↩) = {↪Ernest Hemingway↩, ↪Victor Hugo↩, . . . }, Inst(↪horse↩) = {↪Tornado↩, ↪Black↩, . . . }; its opposite relation, Class, Class(↪Ernest Hemingway↩) = {↪writer↩, ↪American↩, . . . }, Class(↪Black↩) = {↪horse↩, . . . }; meronymy (Mero), the part-of relation, and its opposite, holonymy (Holo), Mero(↪plane↩) = {↪fuselage↩, ↪wing↩, . . . }; verbal relations such as instrument (Instr), which links an action to its typical instrument, Instr(↪to dig↩) = {↪pick↩, . . . }, Instr(↪to write↩) = {↪pen↩, ↪keyboard↩, . . . }; the agent relation (agt), which links an action to its typical agent, and the patient relation (pt), which links an action to its typical patient, the entity affected by it, agt(↪to eat↩) = ↪cat↩, pt(↪to eat↩) = ↪food↩.

D. Using Lexical Functions

1) For applications: Machine translation: Igor Mel’čuk introduced lexical functions in MT after noticing that some terms are associated with others whereas their direct equivalents are not used to mark a similar idea. Thus, we speak of “grosse fièvre” in French but not of *"big fever" in English, where “high fever” is used instead. These phenomena are modelled by lexical functions, which can be applied to any language in the same manner and are considered universal. In MT, LFs can be used as an interlingua, i.e. as an intermediate language. Information retrieval: IR can be divided into two phases. The first, document indexing, consists of building a computational representation of each document. The second, the search phase, consists of transforming the request into a similar representation and extracting the closest documents according to the given criteria. Lexical functions can be useful to abstract away from the synonymy of values. For example, we can imagine that the text representation does not directly refer to text segments like “a high fear” or “crushing majority” but rather to Magn(↪fear↩) and Magn(↪majority↩). Then, documents with “a high fear” or “a strong fear” and “crushing majority” or “landslide majority” would be found more easily than with simple distributional systems like SMART [19] or LSA [5].
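To make the LF machinery and the indexing idea above concrete, here is a minimal Python sketch of lexical functions stored as typed relations and used to normalise collocations at indexing time. The table entries and the function lf_normalise are illustrative assumptions, not the actual lexical base used in our prototype.

```python
# A lexical function maps a (LF name, keyword) pair to its set of values.
# The entries below are illustrative examples only.
LF = {
    ("Magn", "fear"): {"strong", "high"},
    ("Magn", "majority"): {"crushing", "landslide"},
    ("Syn", "plane"): {"airplane", "aeroplane"},
}

def lf_normalise(noun: str, adjective: str) -> str:
    """Replace a collocation by its LF expression when one applies,
    e.g. ('majority', 'crushing') -> 'Magn(majority)'."""
    for (name, keyword), values in LF.items():
        if keyword == noun and adjective in values:
            return f"{name}({noun})"
    return f"{adjective} {noun}"

# Both surface variants collapse onto the same index term.
assert lf_normalise("majority", "crushing") == "Magn(majority)"
assert lf_normalise("majority", "landslide") == "Magn(majority)"
```

Under such a normalisation, a request containing “landslide majority” matches documents containing “crushing majority”, which a purely distributional index would miss.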
2) For solving semantic analysis problems: LFs can provide clues which help in the various tasks described. Lexical disambiguation: both types of lexical function can help. (1) LFLK: identifying a syntagmatic relation between two words, or at least estimating its existence, can help identify the possible meanings of the corresponding lexical items. Thus, in “For his recent election to the senate, Mr Smith obtained a crushing majority.”, ↪majority↩ can be partly disambiguated thanks to the LF Magn. Indeed, ↪majority↩ has as possible meanings the proportion related to age, the one related to a vote, and the one related to an assembly, but only Magn(majority/vote) = ↪crushing↩ and Magn(majority/assembly) = ↪crushing↩ exist. In the same way, synonyms or generics can indirectly contribute to disambiguation via the identity relation. (2) LFWK: they formalise the world relations which can exist between terms. Thus, information such as “Renault has a connection with cars” or “Napoleon was an emperor” can contribute to lexical disambiguation. Here again, disambiguation can proceed indirectly, by identifying the identity relations through hypernymy or instantiation. Identity relation identification: these relations are partly supported by equivalent terms in context, which can be synonyms but also hypernyms. Knowing or identifying these relations can thus be a determining element for the reconstitution of meaning.

III. LF INSTANTIATION: LEXICAL-THEMATIC INFORMATION

A. Thematic Information: Conceptual Vectors

We represent the thematic aspects of textual segments (documents, paragraphs, phrases, etc.) by conceptual vectors. Vectors have long been used in information retrieval [19] and for meaning representation in the LSI model [5] derived from latent semantic analysis (LSA) studies in psycholinguistics. In computational linguistics, [3] proposed a formalism for projecting the linguistic notion of semantic field onto a vector space, from which our model is inspired. From a set of elementary concepts, it is possible to build vectors (conceptual vectors) and to associate them with any linguistic object. This vector approach rests on well-known mathematical properties, so well-founded formal manipulations with reasonable linguistic interpretations can be applied. Concepts are defined from a thesaurus (in our prototype, applied to French, we used the Larousse thesaurus [13], where 873 concepts are identified). Let C be a finite set of n concepts; a conceptual vector V is a linear combination of the elements c_i of C. For a meaning A, the vector V(A) is the description (in extension) of the activations of all the concepts of C. For example, the different meanings of ↪door↩ could be projected onto the following concepts (the CONCEPT[intensity] pairs are ordered by decreasing value): V(↪door↩) = (OPENING[0.8], BARRIER[0.7], LIMIT[0.65], . . . ).

Comparison between conceptual vectors is done with the angular distance: for two conceptual vectors A and B, DA(A, B) = arccos(Sim(A, B)), where Sim(X, Y) = cos(X, Y) = (X · Y) / (‖X‖ × ‖Y‖). Intuitively, this function evaluates thematic proximity by measuring the angle between the two vectors. We generally consider that for DA(A, B) ≤ π/4 (45°), A and B are thematically close and share many concepts, whereas for DA(A, B) ≥ π/4 their thematic proximity is loose; around π/2, they have no relation. DA is a true distance function: it verifies the properties of reflexivity, symmetry and triangle inequality. We have, for example, the following angles (values in radians and degrees): DA(V(↪tit↩), V(↪tit↩)) = 0 (0°); DA(V(↪tit↩), V(↪sparrow↩)) = 0.35 (20°); DA(V(↪tit↩), V(↪bird↩)) = 0.55 (31°); DA(V(↪tit↩), V(↪train↩)) = 1.28 (73°); DA(V(↪tit↩), V(↪insect↩)) = 0.57 (32°). The first value has a straightforward interpretation, as a ↪tit↩ cannot be closer to anything else than to itself. The second and third are not surprising either, since a ↪tit↩ is a kind of ↪sparrow↩, which is a kind of ↪bird↩. A ↪tit↩ has little in common with a ↪train↩, which explains the large angle between them. One may wonder why ↪tit↩ and ↪insect↩ are rather close, with only 32° between them; if we scrutinise the definition of ↪tit↩ from which its vector is computed (insectivorous passerine bird with colorful feathers), the interpretation of these values becomes clearer. Indeed, the thematic distance is in no way an ontological distance.
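The angular distance above is straightforward to compute. The sketch below uses a toy four-concept space with made-up activation values; the real prototype works in the 873-concept Larousse space.

```python
import numpy as np

def angular_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DA(A, B) = arccos(Sim(A, B)), with Sim the cosine; result in [0, pi]."""
    sim = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(sim, -1.0, 1.0)))  # clip guards rounding

# Toy concept space (ANIMAL, FLIGHT, FOOD, VEHICLE) with hypothetical values.
tit   = np.array([0.9, 0.8, 0.1, 0.0])
train = np.array([0.0, 0.1, 0.1, 0.9])
print(np.degrees(angular_distance(tit, tit)))    # 0 degrees: identical meaning
print(np.degrees(angular_distance(tit, train)))  # large angle: unrelated fields
```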
B. Limitations of Conceptual Vectors for LF Detection

As shown in [2], distances computed on vectors are influenced by shared and/or distinct components. The angular distance is a good tool for our purposes because of its mathematical characteristics, its simplicity to understand and to interpret linguistically, and its efficient implementation. Whatever the distance chosen, when applied to this kind of vector (representing ideas and not term occurrences), the smaller the distance, the larger the number of lexical objects within the same semantic field (what Rastier calls an isotopy). In the framework of SA as outlined here, we use the angular distance to exploit the mutual information carried by conceptual vectors in order to disambiguate words pertaining to the same or closely related semantic fields. Thus, “Zidane scored a goal.” can be disambiguated thanks to shared ideas about sport, while “The lawyer pleads at the court.” can be disambiguated thanks to those of justice. Furthermore, vectors allow prepositions to be attached properly thanks to knowledge about vision: the prepositional phrase “with a telescope” would be attached to the verb “saw” in “He saw the girl with the telescope.”.

Conversely, conceptual vectors cannot disambiguate terms pertaining to different semantic fields, and an analysis based solely on them may even lead to misinterpretation. For example, the French noun ↪avocat↩ has two meanings: the equivalent of ↪lawyer↩ and the equivalent of the fruit ↪avocado↩. In the French sentence “L’avocat a mangé un fruit.” (“The lawyer has eaten a fruit”), ↪to eat↩ and ↪fruit↩ convey the idea of ↪food↩, hence the interpretation computed by conceptual vectors for ↪avocat↩ will be ↪avocado↩. It would have been useful to know that “a lawyer is a human” and “a human eats”, yet this is not possible using conceptual vectors alone. They are simply not sufficient to exploit the instantiation of lexical functions in texts; a lexical network, however, can help to overcome these shortcomings. Such limitations have been shown in SA experiments using ant algorithms in [12].

C. Lexical Information: Lexical Networks

1) Principles: Natural language processing has used lexical networks for more than forty years, with Ross Quillian’s work going back to the end of the sixties [4]. Authors differ concerning the type of network and the way to use it.
Some authors directly use graph microstructures (cliques, hubs), while others use them indirectly through similarity operations and/or activation of nodes (neural networks, PageRank). The type of network depends on the entities chosen as nodes (lexical items, meanings, concepts) and on the lexical relations chosen as edges. We can consider two families of lexical networks: (1) semantic lexical networks, such as Quillian’s [4] or, more recently, [20] and WordNet [7], where nodes correspond to lexical items, concepts or meanings and there are usually several kinds of edges qualifying the relation (synonymy, antonymy, hypernymy, . . . ); (2) distributional lexical networks, such as [21], where two terms are linked by an edge provided they co-occur in a corpus; in this kind of network there is only one type of edge.

For SA, lexical networks are used only for lexical disambiguation. Jean Véronis, for example, showed that distributional networks are small worlds and used this property to find every possible meaning of a word [21]. He partitioned graphs to extract the different components organised around a hub, a central node to which the terms used in a same context are linked. For a SA, these components are exploited by searching for the partition containing the words in the co-text of the target term. As for the indirect use of the graph structure, it proceeds step by step by mutual activation and excitation of the nodes, to cause compatible solutions to emerge. [20], for example, use a technique inspired by neural networks on a graph built from dictionary definitions, while [16] built a network from the words of a sentence and their possible meanings, with edges weighted according to a similarity between definitions; excitation of the nodes is done with a PageRank algorithm. Very few authors use edge labels in their experiments; we have found only the Leacock and Chodorow measure [14], based on WordNet is-a relations.

2) Limits of lexical networks: All these methods help to solve only one of the problems mentioned, lexical ambiguity. They provide a way to state a preference about the meaning of each word of a text taken individually. This very feature makes it impossible even to obtain the compatible interpretation paths. By their very nature, it is hard to imagine how to extend the above-mentioned methods to solve at least one of the other problems. Indeed, they all consider that the important information in the network lies only in the nodes, whereas in reality it also lies in the edges. However, as mentioned in part II-D2, finding the relations between the items of a statement can contribute to the resolution of other types of ambiguity (e.g. lexical ambiguity). Of course, this last comment has to be considered with respect to the specific networks used: none of the previous examples presents both paradigmatic and syntagmatic information, as the network we build does. Nevertheless, some research converges towards this idea. Syntagmatic information is crucially lacking in a network like WordNet, a phenomenon known as the tennis problem: the lexical item ↪racket↩ is in one area while ↪court↩ and ↪player↩ are in others, and this holds no matter what field is chosen. Syntagmatic and paradigmatic relations are essential for natural and flexible access to words and their meanings; Michael Zock and Olivier Ferret have made a very interesting proposal in this respect [8].
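To make the role of typed edges concrete, the following sketch stores a lexical network whose edges carry relation labels (LF names). The class and its handful of entries are illustrative assumptions, not our actual base.

```python
from collections import defaultdict

class LexicalNetwork:
    """Nodes are acceptions; edges are labelled by the relation they carry."""
    def __init__(self):
        self.edges = defaultdict(set)  # (source, relation) -> set of targets

    def add(self, source: str, relation: str, target: str):
        self.edges[(source, relation)].add(target)

    def related(self, source: str, relation: str) -> set:
        return self.edges[(source, relation)]

net = LexicalNetwork()
net.add("chair", "Hyper", "seat")    # world knowledge: a chair is a seat
net.add("lawyer", "Hyper", "human")  # the fact missing in the avocat example
net.add("to eat", "agt", "human")    # typical agent of eating
print(net.related("lawyer", "Hyper"))  # {'human'}
```

With labelled edges, the information lies in the edges as well as in the nodes: a query such as related("lawyer", "Hyper") asks a question no unlabelled co-occurrence network can answer.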
D. Hybrid Representation of Meaning: Mixing Conceptual Vectors and Lexical Networks

While lexical networks offer unquestionable precision, their recall is poor: it is difficult to represent all possible relations between all terms. Indeed, how can we represent the fact that two terms are in the same semantic field? The relation may be absent from the network simply because the terms are not connected by “traditional” arcs. Introducing arcs of the type “semantic field” is also problematic, for two reasons stemming from the fuzzy and flexible nature of this relation: (1) the first is related to the database creator’s understanding of the relation: when are two synsets considered to be in the same semantic field? In an unfavourable case there would be very few arcs, while in the extreme opposite case we could face a combinatorial explosion in the number of arcs; (2) the second, more fundamental, problem is related to the representation itself: how could a fuzzy relation, the essence of which is a continuum, be represented with discrete elements? Thus, the continuous domain offered by conceptual vectors provides a flexibility that the discrete domain offered by networks cannot. Vectors are able to bring together words which share ideas, even uncommon ones; a network, on the other hand, cannot do so, however common the ideas. Conceptual vectors and the thematic distance operation can therefore correct the weak recall inherent to lexical networks. This is why conceptual vectors and lexical networks are complementary: the defects of one are mitigated by the qualities of the other.
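A minimal sketch of this complementarity, assuming toy entries and two-dimensional vectors (nothing here is our actual data): a relatedness test first consults the precise typed relations of the network and falls back on the thematic angular distance when the network is silent.

```python
import numpy as np

# Illustrative fragments: a typed network and a conceptual-vector table.
NETWORK = {("chair", "Hyper"): {"seat"}, ("lawyer", "Hyper"): {"human"}}
VECTORS = {"goal": np.array([0.9, 0.1]), "referee": np.array([0.8, 0.3])}

def related(a: str, b: str, max_angle: float = np.pi / 4) -> bool:
    # 1. Network lookup: any typed edge from a to b counts (precision).
    if any(b in targets for (src, _), targets in NETWORK.items() if src == a):
        return True
    # 2. Vector fallback: thematically close if the angle is below pi/4 (recall).
    if a in VECTORS and b in VECTORS:
        va, vb = VECTORS[a], VECTORS[b]
        sim = (va @ vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
        return float(np.arccos(np.clip(sim, -1.0, 1.0))) <= max_angle
    return False

print(related("lawyer", "human"))  # True via the network
print(related("goal", "referee"))  # True via vector proximity
```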
IV. ANT ALGORITHMS AND SA

It has been demonstrated that cooperation inside an ant colony is self-organised: it results from simple interactions between individuals which allow the colony to solve complicated problems. This phenomenon, called swarm intelligence, is increasingly used in computer science, where centralised control systems are often successfully replaced by control based on interactions between simple elements [6]. In these algorithms, the environment is usually represented by a graph. Virtual ants exploit the pheromones deposited by others and explore the graph pseudo-randomly; the pheromone quantity plays the role of a heuristic. These algorithms are a good alternative for solving problems modelled as graphs: they allow a fast and efficient walkthrough comparable to other resolution methods, and their main asset is their strong ability to adapt to a changing environment.

We think that the phenomena to be addressed for a proper SA should be considered globally, for at least two reasons. (1) They depend on each other; we exemplified this with lexical functions in II-D2, and the demonstration extends easily to the other phenomena. (2) It is problematic to combine expertises through a supervisor: the criteria are often contradictory, and their possible weightings are functions of one another (again because they are related). Finally, the bottleneck is not only the design of the expert agents but the precise definition of an aggregation function for the returned values. Ant algorithms constitute an easy and efficient way to handle SA issues in a holistic manner. Each ant caste is associated with heuristics intended to solve a particular problem (in the presented model, to instantiate an LF type) and thus has its own behaviour, partly influenced by the other castes. The idea is to constitute a beam of clues which causes one (or several) compatible solutions to emerge. Thus, when the elements needed to resolve an ambiguity are present, solving one problem can help in the resolution of another. In this way, somewhat like a domino effect, resolution proceeds progressively.
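The sketch below illustrates the mechanism under strong simplifications: a single caste instead of one per LF, a toy acception graph for the ↪avocat↩ example, and arbitrary deposit and evaporation constants. It is a schematic rendering of the idea, not our implementation.

```python
import random
from collections import defaultdict

# Toy environment: nodes are acceptions; edges come from the text and the
# lexical network. Constants (0.5 deposit, 0.95 evaporation) are arbitrary.
GRAPH = {
    "avocat/lawyer":  ["to eat", "human"],
    "avocat/avocado": ["to eat", "fruit"],
    "to eat":         ["avocat/lawyer", "avocat/avocado", "fruit", "human"],
    "fruit":          ["to eat", "avocat/avocado"],
    "human":          ["to eat", "avocat/lawyer"],
}
pheromone = defaultdict(lambda: 1.0)  # edge -> pheromone level

def walk(start: str, steps: int = 3) -> list:
    """One ant explores pseudo-randomly, biased by pheromone levels."""
    path, node = [start], start
    for _ in range(steps):
        nexts = GRAPH[node]
        node = random.choices(nexts, weights=[pheromone[(node, n)] for n in nexts])[0]
        path.append(node)
    return path

for _ in range(200):                  # colony iterations
    path = walk(random.choice(list(GRAPH)))
    for a, b in zip(path, path[1:]):  # deposit pheromone on the edges used
        pheromone[(a, b)] += 0.5
    for edge in list(pheromone):      # evaporation keeps levels bounded
        pheromone[edge] *= 0.95

# The most reinforced edges suggest the emerging interpretation.
print(max(pheromone.items(), key=lambda kv: kv[1]))
```

Edges that keep being reinforced survive evaporation; in the full model, the bridges built between compatible acceptions play this role, and a reading of the sentence emerges from them.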